
    Breaking Cuckoo Hash: Black Box Attacks

    Introduced less than twenty years ago, cuckoo hashing has a number of attractive features, such as a constant worst-case number of memory accesses for queries and close-to-full memory utilization. Cuckoo hashing has been widely adopted to perform exact matching of an incoming key against a set of stored (key, value) pairs in both software and hardware implementations. This widespread adoption makes it important to consider the security of cuckoo hashing. Most hash-based data structures can be attacked by generating collisions that degrade their performance. For cuckoo hashing, collisions can lead to insertion failures, which in some systems cause a system failure. For example, if cuckoo hashing is used to perform Ethernet lookup and a given MAC address cannot be added to the cuckoo hash, the switch will not be able to correctly forward frames to that address. Previous works have shown that such attacks are possible when the attacker knows the hash functions used in the implementation. In many cases, however, the attacker does not have that information and only has access to the cuckoo hash operations to perform insertions, removals, or queries. This article considers the security of a cuckoo hash against an attacker that has only black-box access to it. The analysis shows that by carefully performing user operations on the cuckoo hash, the attacker can force insertion failures with a small set of elements. The proposed attack has been implemented and tested for different configurations to demonstrate its feasibility. The fact that a cuckoo hash can be broken with access only to its user functions should be taken into account when implementing it in critical systems. The article also discusses potential approaches to mitigate this vulnerability.
    This work was supported by the ACHILLES project (PID2019-104207RB-I00) and the Go2Edge network (RED2018-102585-T) funded by the Spanish Ministry of Science and Innovation, and by the Madrid Community project TAPIR-CM (P2018/TCS-4496).
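
    As a toy illustration of the failure signal such an attack exploits, the sketch below assumes a minimal two-table cuckoo hash with one slot per bucket; the article's exact configuration and attack strategy are not reproduced. The point is that insertion failures are observable through the public API alone, with no knowledge of the hash functions:

```python
import random

class CuckooHash:
    """Minimal two-table cuckoo hash with one slot per bucket."""
    def __init__(self, size, max_kicks=50):
        self.size = size
        self.max_kicks = max_kicks
        self.tables = [[None] * size, [None] * size]
        self.seeds = (random.random(), random.random())  # secret hash functions

    def _pos(self, key, i):
        return hash((self.seeds[i], key)) % self.size

    def insert(self, key):
        """Public user operation; returns False on insertion failure."""
        for _ in range(self.max_kicks):
            for i in (0, 1):
                p = self._pos(key, i)
                if self.tables[i][p] is None:
                    self.tables[i][p] = key
                    return True
            # Both candidate buckets full: evict from table 0 and retry.
            p = self._pos(key, 0)
            key, self.tables[0][p] = self.tables[0][p], key
        return False

# Black-box view: the attacker never sees the seeds, only insert outcomes.
table = CuckooHash(size=16)
failures = [k for k in range(200) if not table.insert(f"key{k}")]
print(f"observed {len(failures)} insertion failures via the public API")
```

    The observable outcome of each insertion is the raw material the attacker works with; the article shows how to turn it into a small set of elements that reliably forces failures.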

    Selective Neuron Re-Computation (SNRC) for Error-Tolerant Neural Networks

    Artificial Neural Networks (ANNs) are widely used to solve classification problems in many machine learning applications. When errors occur in the computational units of an ANN implementation, due to, for example, radiation effects, the result of an arithmetic operation can be changed and the predicted classification class may be erroneously affected. This is not acceptable when ANNs are used in safety-critical applications, because an incorrect classification may result in a system failure. Existing error-tolerant techniques usually rely on physically replicating parts of the ANN implementation or incur a significant computation overhead. Therefore, efficient protection schemes are needed for ANNs that run on a processor in resource-limited platforms. This paper proposes a technique referred to as Selective Neuron Re-Computation (SNRC). Based on the ANN structure and algorithmic properties, SNRC identifies the cases in which errors have no impact on the outcome; errors therefore only need to be handled by re-computation when the classification result is detected as unreliable. Compared with existing temporal redundancy-based protection schemes, SNRC saves more than 60 percent of the re-computation overhead (more than 90 percent in many cases) while achieving complete error protection, as assessed over a wide range of datasets. Different activation functions are also evaluated.
    This research was supported by the National Science Foundation Grants CCF-1953961 and 1812467, by the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Science and Innovation, and by the Madrid Community research project TAPIR-CM P2018/TCS-4496.
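
    A minimal sketch of the selective re-computation idea follows. It uses an assumed score-margin criterion for "unreliable"; the paper derives its actual decision rule from the ANN structure and algorithmic properties, so the threshold here is purely illustrative:

```python
import numpy as np

def snrc_classify(forward, x, margin=0.1):
    """Run the network once; re-compute only when the result looks unreliable.

    forward: function mapping an input to a vector of class scores.
    margin:  assumed threshold on the top-two score gap (illustrative only).
    """
    scores = forward(x)                   # first, possibly faulty, pass
    top2 = np.partition(scores, -2)[-2:]  # two largest scores
    if top2[1] - top2[0] >= margin:
        return int(np.argmax(scores))     # wide margin: treated as reliable
    return int(np.argmax(forward(x)))     # narrow margin: re-compute to be safe
```

    The savings come from the fact that most inputs are classified with a comfortable margin, so the second pass is rarely triggered.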

    Codes for Limited Magnitude Error Correction in Multilevel Cell Memories

    Multilevel cell (MLC) memories have been advocated for increasing density at low cost in next-generation memories. However, storing several bits in a cell reduces the distance between levels; this reduced margin makes such memories more vulnerable to defects and parameter variations, leading to errors in the stored data. These errors are typically of limited magnitude, because the induced change makes the stored value exceed only a few of the level boundaries. To protect these memories from such errors and ensure that the stored data is not corrupted, Error Correction Codes (ECCs) are commonly used. However, most existing codes have been designed to protect memories in which each cell stores a single bit and are therefore not efficient for protecting MLC memories. In this paper, an efficient scheme that can correct errors of magnitude up to 3 is presented and evaluated. The scheme is based on combining ECCs that are commonly used to protect traditional memories: Interleaved Parity (IP) bits and Single Error Correction and Double Adjacent Error Correction (SEC-DAEC) codes. Both codes are combined in the proposed IP-DAEC scheme to efficiently provide a strong correction capability that exceeds most existing coding schemes for limited magnitude errors. The SEC-DAEC code is used to detect the cell in error and correct some bits, while the IP bits identify the remaining erroneous bits in the memory cell. The use of these simple codes results in an efficient decoder implementation compared to existing techniques, as shown by the evaluation results presented in this paper. The proposed scheme is also competitive in terms of the number of parity check bits and memory redundancy. Therefore, the proposed IP-DAEC scheme is a very efficient alternative for protecting MLC memories against limited magnitude errors.
    Pedro Reviriego was partially supported by the TEXEO project (TEC2016-80339-R) funded by the Spanish Research Plan and by the Madrid Community research project TAPIR-CM grant no. P2018/TCS-4496.
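
    A sketch of the interleaved-parity half of the scheme is shown below; the SEC-DAEC half, which locates the cell in error, is omitted, and the encoding details here (interleaving depth 3, one word of three 3-bit cells) are illustrative assumptions:

```python
def ip_encode(bits, k=3):
    """Compute k interleaved parity bits: parity bit j covers positions i with i % k == j."""
    return [sum(bits[j::k]) % 2 for j in range(k)]

def ip_syndrome(read_bits, stored_parity, k=3):
    """Non-zero syndrome positions flag the interleaves that contain an error;
    combined with the cell located by the SEC-DAEC code, they pinpoint the
    erroneous bits inside that cell."""
    return [a ^ b for a, b in zip(ip_encode(read_bits, k), stored_parity)]

word = [1, 0, 1, 1, 0, 0, 1, 0, 1]  # 3 cells x 3 bits (illustrative)
parity = ip_encode(word)
word[4] ^= 1                         # a limited magnitude error flips one bit
print(ip_syndrome(word, parity))     # -> [0, 1, 0]: interleave 1 is in error
```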

    Latin and Greek in Computing: Ancient Words in a New World

    In computing, old words are reused by giving them new meanings. However, many people may not be aware of the origin of ancient words that have found new uses in computing. We advocate for basic computing science and engineering courses to cover the origins of the most common ancient words used in computing.
    We thank Mariano Arnal for his comments and suggestions on words and their origins. This work was supported in part by the ACHILLES project PID2019-104207RB-I00 funded by the Spanish Agencia Estatal de Investigación 10.13039/501100011033. (We can see that Greek mythology also influences researchers when naming their projects. In our case, we took the name from the famous character in Homer's Iliad.)

    Toward a Fault-Tolerant Star Tracker for Small Satellite Applications

    Star trackers are autonomous, high-accuracy electronic systems used to determine the attitude of a spacecraft. In recent years, commercial-off-the-shelf (COTS)-based star trackers are growing in importance for low-cost and short-duration missions, but their fault tolerance against soft errors has not been studied in detail. In this paper, we propose a self-healing system protected with ad hoc techniques that can be used as a first step toward a fault-tolerant COTS-based star tracker for smallsat applications.
    The authors would like to thank E. Palombo from ESA/ESTEC in Noordwijk, the Netherlands, for providing the star tracker images used in this paper.

    Protecting Memories against Soft Errors: The Case for Customizable Error Correction Codes

    As technology scales, radiation-induced soft errors create more complex error patterns in memories, with a single particle corrupting several bits. This poses a challenge to the Error Correction Codes (ECCs) traditionally used to protect memories, which can correct only single-bit errors. During the last decade, a number of codes have been developed to correct the emerging error patterns, focusing initially on double adjacent errors and later on 3-bit burst errors. However, as memory cells get smaller and smaller, the error patterns created by radiation will continue to change, and new codes will be needed. In addition, the memory layout and the technology used may make some patterns more likely than others. For example, in some memories there may be elements that separate blocks of bits in a word, making errors that affect two blocks less likely. Finally, for a given memory, depending on the data stored, some error patterns may be more critical than others. For example, if numbers are stored in the memory, errors in the more significant bits usually have a larger impact. Therefore, for a given memory and application, optimal protection calls for a code that corrects a specific set of patterns. This is not possible today, as only a limited number of codes are available in terms of correctable error patterns and word lengths. However, most of the codes used to protect memories are linear block codes that have a regular structure and whose design can be automated. In this paper, we propose the automation of error correction code design for memory protection. To that end, we introduce a software tool that, given a word length and the error patterns that need to be corrected, produces a linear block code described by its parity check matrix, together with the bit placement. The benefits of this automated design approach are illustrated with several case studies. Finally, the tool is made available so that designers can easily produce custom error correction codes for their specific needs.
    Jiaqiang Li and Liyi Xiao would like to acknowledge the support of the Fundamental Research Funds for the Central Universities (Grant No. HIT.KISTP.201404), the Harbin science and innovation research special fund (2015RAXXJ003), and the Special fund for development of Shenzhen strategic emerging industries (JCYJ20150625142543456). Pedro Reviriego would like to acknowledge the support of the TEXEO project TEC2016-80339-R funded by the Spanish Ministry of Economy and Competitiveness and of the Madrid Community research project TAPIR-CM Grant No. P2018/TCS-4496.
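
    The core of such automation can be sketched as a greedy search for parity-check matrix columns under the constraint that every target error pattern maps to a unique non-zero syndrome. This is a simplified illustration, not the paper's tool: the word length and parity-bit count below are assumptions, the greedy scan may need a larger r than an optimized search, and the bit-placement output is omitted:

```python
def pattern_syndromes(cols, patterns):
    """Syndrome of an error pattern = XOR of the H columns of the bits it flips."""
    syn = []
    for pat in patterns:
        if max(pat) < len(cols):          # only patterns fully assigned so far
            s = 0
            for bit in pat:
                s ^= cols[bit]
            syn.append(s)
    return syn

def design_parity_check(n, patterns, r):
    """Greedily choose n r-bit columns so that all target patterns get
    distinct non-zero syndromes; returns the columns of H, or None."""
    cols = []
    for _ in range(n):
        for candidate in range(1, 2 ** r):
            syn = pattern_syndromes(cols + [candidate], patterns)
            if 0 not in syn and len(syn) == len(set(syn)):
                cols.append(candidate)
                break
        else:
            return None                   # no valid column: retry with larger r
    return cols

# Example target: single errors plus double adjacent errors (SEC-DAEC-like).
n, r = 22, 6                              # assumed word length and parity bits
patterns = [(i,) for i in range(n)] + [(i, i + 1) for i in range(n - 1)]
print(design_parity_check(n, patterns, r))
```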

    Correction Masking: a technique to implement efficient SET tolerant error correction decoders

    Single Event Transients (SETs) can be a major concern for combinational circuits. Their importance grows as technology scales, because a small charge can create a large disturbance on a circuit node. One example of circuits that can suffer from SETs is the decoders of the Error Correction Codes (ECCs) used to protect memories from errors. This paper presents Correction Masking (CM), a technique to implement SET-tolerant syndrome decoders. The proposed technique is presented and evaluated both in terms of protection effectiveness and circuit overhead. The results show that it provides effective protection while significantly reducing circuit area and power compared to Triple Modular Redundancy (TMR). Interestingly, Correction Masking also reduces the delay, as it adds less logic to the critical path than TMR. Finally, the proposed technique can be used with any syndrome decoder, which means it is applicable to many of the ECCs used to protect memories, such as Single Error Correction (SEC), Single Error Correction Double Error Detection (SEC-DED), Single Error Correction Double Adjacent Error Correction (SEC-DAEC), and 3-bit burst codes.
    The work of Pedro Reviriego was supported in part by the Spanish Agencia Estatal de Investigación (AEI) 10.13039/501100011033 through the ACHILLES Project under Grant PID2019-104207RB-I00 and the Go2Edge Network under Grant RED2018-102585-T, and in part by the Madrid Community Research Project TAPIR-CM under Grant P2018/TCS-4496.
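
    For context, this is the kind of plain, unprotected syndrome decode that CM hardens; the sketch below is a baseline SEC decoder and does not implement the masking technique itself:

```python
def sec_decode(word, h_cols):
    """Baseline SEC syndrome decode: the syndrome is the XOR of the H columns
    of the set bits; a non-zero syndrome equal to column j flips bit j.
    An SET in this logic can trigger a spurious 'correction', which is the
    failure mode Correction Masking targets in hardware."""
    syndrome = 0
    for j, bit in enumerate(word):
        if bit:
            syndrome ^= h_cols[j]
    if syndrome == 0:
        return word                 # no error detected
    return [b ^ (h_cols[j] == syndrome) for j, b in enumerate(word)]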

    Avoiding Flow Size Overestimation in the Count-Min Sketch with Bloom Filter Constructions

    The Count-Min sketch is the most popular data structure for flow size estimation, a basic measurement task required in many networks. Typically, the number of potential flows is large, which makes it impossible to maintain a counter per flow in memory with a high access rate. The Count-Min sketch is probabilistic and relies on mapping each flow to multiple counters through hashing. This implies a potential estimation error: the size of a flow is overestimated when all of its counters are shared with other flows with observed traffic. Although the estimation error can be probabilistically bounded, many applications would benefit from accurate flow size estimation and a guarantee that overestimation is completely avoided. We describe a design of the Count-Min sketch with accurate estimations whenever the number of flows with observed traffic follows a known bound, regardless of the identity of these particular flows. We make use of a concept of Bloom filters that avoid false positives, and indicate the limitations of existing Bloom filter designs towards accurate size estimation. We suggest new Bloom filter constructions that scale to support a larger number of flows, and explain how these imply the unique guarantee of accurate flow size estimation in the well-known Count-Min sketch.
    Ori Rottenstreich was partially supported by the German-Israeli Foundation for Scientific Research and Development (GIF), by the Gordon Fund for System Engineering, as well as by the Technion Hiroshi Fujiwara Cyber Security Research Center and the Israel National Cyber Directorate. Pedro Reviriego would like to acknowledge the support of the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Science and Innovation, and of the Madrid Community research project TAPIR-CM grant no. P2018/TCS-4496.
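
    For reference, a baseline Count-Min sketch is shown below; the Bloom-filter-based constructions that remove overestimation are the paper's contribution and are not sketched here. The comment in `estimate` marks where the overestimation the paper eliminates comes from:

```python
import hashlib

class CountMin:
    """Baseline Count-Min sketch: d rows of w counters; each flow maps to
    one counter per row through hashing."""
    def __init__(self, w, d):
        self.w, self.d = w, d
        self.rows = [[0] * w for _ in range(d)]

    def _index(self, flow, row):
        digest = hashlib.sha256(f"{row}:{flow}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.w

    def add(self, flow, count=1):
        for row in range(self.d):
            self.rows[row][self._index(flow, row)] += count

    def estimate(self, flow):
        # Never underestimates; overestimates exactly when all d counters
        # are also incremented by other flows (the case the paper removes).
        return min(self.rows[row][self._index(flow, row)]
                   for row in range(self.d))
```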

    On the Security of the K Minimum Values (KMV) Sketch

    Data sketches are widely used to accelerate operations in big data analytics. For example, algorithms use sketches to compute the cardinality of a set or the similarity between two sets. Sketches achieve significant reductions in computing time and storage requirements by providing probabilistic estimates rather than exact values. In many applications an estimate is sufficient, so accuracy can be traded for computational complexity; this enables the use of probabilistic sketches. However, the use of probabilistic data structures may create security issues, because an attacker may manipulate the data in such a way that the sketches produce an incorrect estimate. For example, an attacker could inflate the estimated number of distinct users to increase their revenue or popularity. Recent works have shown that an attacker can manipulate HyperLogLog, a sketch widely used for cardinality estimation, with no knowledge of its implementation details. This paper considers the security of K Minimum Values (KMV), a sketch that is also widely used to implement both cardinality and similarity estimates. Its vulnerabilities are characterized at an implementation-independent level, and attacks that manipulate the similarity estimate are formulated as part of a novel adversary model. The analysis and simulations show that KMV is vulnerable to attacks that either increase or reduce the estimate. These scenarios are validated by running the attacks against the KMV implementation in the Apache DataSketches library, with excellent agreement between theory and experimental results.
    Pedro Reviriego acknowledges the support of the ACHILLES project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Economy and Competitiveness, and of the Madrid Community research project TAPIR-CM under Grant P2018/TCS-4496.
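
    A minimal KMV sketch for cardinality is shown below; similarity is estimated from the overlap of two sketches' minimum values, and the paper's attack details are not reproduced here:

```python
import hashlib
import heapq

class KMV:
    """K Minimum Values: keep the k smallest hash values seen; if the k-th
    smallest is v, the cardinality estimate is (k - 1) / v."""
    def __init__(self, k):
        self.k = k
        self.heap = []           # max-heap (negated) of the k smallest hashes

    def _hash(self, item):
        d = hashlib.sha256(str(item).encode()).digest()
        return int.from_bytes(d[:8], "big") / 2.0 ** 64  # uniform in [0, 1)

    def add(self, item):
        v = self._hash(item)
        if -v in self.heap:      # value already tracked: ignore duplicate
            return
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, -v)
        elif v < -self.heap[0]:
            heapq.heappushpop(self.heap, -v)

    def estimate(self):
        if len(self.heap) < self.k:
            return float(len(self.heap))       # exact for small sets
        return (self.k - 1) / -self.heap[0]    # -heap[0] is the k-th minimum
```

    The shape of the estimator hints at the attack surface: for instance, an adversary who can insert chosen items could search for items with very small hash values to drag the k-th minimum down and inflate the estimate; the paper formalizes such manipulations for the similarity case.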

    Switch based high cardinality node detection

    The detection of supernodes with high cardinality is of interest for network monitoring and security. Existing schemes for supernode detection rely on data structures that are independent of the switching functions. This means that for each packet that traverses the switch, both the switching table and the supernode detection structure have to be checked, which requires significant memory bandwidth. This can create a bottleneck and reduce the speed of the switch, especially in software implementations. In this letter, a scheme is presented that performs supernode detection as part of Ethernet switching and requires neither additional memory accesses nor separate data structures. The scheme has been implemented and compared with existing methods. The results show that the proposed scheme can reliably identify supernodes while providing a speed-up of more than 15% compared with existing solutions.
    This work was supported in part by the Higher Education Commission (HEC) Pakistan and the Ministry of Planning, Development and Special Initiatives under the National Centre for Cyber Security; in part by the ACHILLES Project PID2019-104207RB-I00 and the Go2Edge network RED2018-102585-T funded by the Spanish Ministry of Science and Innovation; and in part by the Madrid Community Research Project TAPIR-CM under Grant P2018/TCS-4496.
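
    A sketch of the piggybacking idea follows, with an assumed per-entry destination bitmap (linear counting) standing in for the cardinality structure; the letter's actual encoding inside the switching table is not reproduced:

```python
class LearningSwitch:
    """Ethernet switch sketch: supernode detection rides on the same table
    access as MAC learning, so no extra memory lookups are needed."""
    def __init__(self, bitmap_bits=64, threshold=32):
        self.table = {}        # src MAC -> [output port, destination bitmap]
        self.m = bitmap_bits
        self.threshold = threshold

    def process(self, src, dst, in_port):
        entry = self.table.setdefault(src, [in_port, 0])
        entry[0] = in_port                       # MAC learning
        entry[1] |= 1 << (hash(dst) % self.m)    # mark one bucket per destination
        if bin(entry[1]).count("1") >= self.threshold:
            print(f"possible supernode: {src}")
        dst_entry = self.table.get(dst)
        return dst_entry[0] if dst_entry else None   # None -> flood the frame
```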